Exercise 1: Finding Michel electrons¶
Michel electrons are produced in muon decays. Since they are electrons, they look like an electromagnetic shower coming off the end of a long muon track.
Stages of the chain required: UResNet only!
Motivation¶
Michel electrons are used in LArTPC experiments as “standard candles”: we understand their energy spectrum very well, so they can be used to calibrate the detector and to compare the quality of different detectors. The Michel electron spectrum is one of the first “tokens” that an experiment will show to prove that the detector is running and that the data can be successfully reconstructed. Since their energy extends up to ~50 MeV, they are also representative of the detector response to electromagnetic particles generated by neutrino activity.
Fig. 9 Example of Michel electron spectrum¶
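The shape in the figure above can be understood from the leading-order Michel decay spectrum, dN/dE ∝ x²(3 − 2x) with x = E/E_max and an endpoint E_max ≈ 52.8 MeV. A minimal sketch of that shape (ignoring radiative corrections and detector smearing, which modify the measured spectrum):

```python
import numpy as np

E_MAX = 52.8  # MeV, approximate kinematic endpoint of the Michel spectrum

def michel_spectrum(E):
    """Leading-order Michel spectrum shape, dN/dE ~ x^2 (3 - 2x), x = E / E_max."""
    x = np.asarray(E) / E_MAX
    return np.where((x >= 0) & (x <= 1), x**2 * (3 - 2 * x), 0.0)

E = np.linspace(0, E_MAX, 200)
dNdE = michel_spectrum(E)
# The spectrum rises monotonically and peaks right at the endpoint (x -> 1).
print('Peak at E = %.1f MeV' % E[np.argmax(dNdE)])  # -> Peak at E = 52.8 MeV
```

In a real detector the sharp endpoint is smeared by energy resolution and shifted by the energy carried away by the radiative photons we discuss below.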
What are we looking for?¶
Fig. 10 Energy loss per unit distance (MeV/cm) for electrons traveling in liquid argon. From [AAA+17].¶
The primary Michel electron that comes out of the muon decay-at-rest can lose energy either through ionization (collision stopping power) or through the production of radiative photons via bremsstrahlung (radiative stopping power). Beyond a certain energy, the radiative stopping power exceeds the collision stopping power. If the radiative photons have enough energy, they can pair-produce, i.e. convert into an electron-positron pair. These, in turn, can produce new radiative photons, and so on; the photons can also undergo Compton scattering. The result is a cascade of electrons and photons: an electromagnetic shower.
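The crossover energy between the two regimes (the “critical energy”) can be estimated with the standard PDG fit for solids and liquids, E_c ≈ 610 MeV / (Z + 1.24). This is a rough sketch, not a precise LAr value (the exact number depends on the definition used):

```python
# Rough estimate of the critical energy, where radiative losses overtake
# ionization losses, using the PDG fit for solids/liquids:
#   E_c ~ 610 MeV / (Z + 1.24)
# Illustrative only; precise tabulated values for liquid argon differ slightly.
Z_ARGON = 18
E_c = 610.0 / (Z_ARGON + 1.24)  # MeV
print('Approximate critical energy in argon: %.1f MeV' % E_c)
```

This lands in the low tens of MeV, which is why a ~50 MeV Michel electron sits right in the transition region shown in the stopping-power figure above.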
Ionization produces track-like energy depositions, whereas the photons can travel some distance before converting into a secondary electron. Hence Michel electrons have two clear topological features: a primary ionization, which is a track-like deposition at the end of the muon track, and scattered energy deposits much further away, which come from these radiative photons.
Fig. 11 Example of Michel electron topology¶
In this exercise we are only concerned with finding the primary ionization of the Michel electron; we will purposefully ignore the radiative photons for now.
Setup¶
If needed, you can edit the path to lartpc_mlreco3d library and to the data folder.
import os
SOFTWARE_DIR = '%s/lartpc_mlreco3d' % os.environ.get('HOME')
DATA_DIR = os.environ.get('DATA_DIR')
The usual imports and setting the right PYTHONPATH… click if you need to see them.
import sys, os
# set software directory
sys.path.insert(0, SOFTWARE_DIR)
import numpy as np
import yaml
import torch
import plotly
import plotly.graph_objs as go
from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected=False)
from mlreco.visualization import scatter_points, plotly_layout3d
from mlreco.visualization.gnn import scatter_clusters, network_topology, network_schematic
from mlreco.utils.ppn import uresnet_ppn_type_point_selector
from mlreco.utils.cluster.dense_cluster import fit_predict_np, gaussian_kernel
from mlreco.main_funcs import process_config, prepare
from mlreco.utils.gnn.cluster import get_cluster_label
from mlreco.utils.deghosting import adapt_labels_numpy as adapt_labels
from mlreco.visualization.gnn import network_topology
from larcv import larcv
/usr/local/lib/python3.6/dist-packages/MinkowskiEngine/__init__.py:42: UserWarning:
The environment variable `OMP_NUM_THREADS` not set. MinkowskiEngine will automatically set `OMP_NUM_THREADS=16`. If you want to set `OMP_NUM_THREADS` manually, please export it on the command line before running a python script. e.g. `export OMP_NUM_THREADS=12; python your_program.py`. It is recommended to set it below 24.
Welcome to JupyROOT 6.22/02
The configuration is loaded from the file inference.cfg.
cfg = yaml.load(open('%s/inference.cfg' % DATA_DIR, 'r').read().replace('DATA_DIR', DATA_DIR),
                Loader=yaml.Loader)
# pre-process configuration (checks + certain non-specified default settings)
process_config(cfg)
# prepare function configures necessary "handlers"
hs = prepare(cfg)
Config processed at: Linux volt004 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$CUDA_VISIBLE_DEVICES="0"
{ 'iotool': { 'batch_size': 4,
'collate_fn': 'CollateSparse',
'dataset': { 'data_keys': [ '/sdf/home/l/ldomine/lartpc_mlreco3d_tutorials/book/data/wire_mpvmpr_2020_04_test_small.root'],
'limit_num_files': 10,
'name': 'LArCVDataset',
'schema': { 'cluster_label': [ 'parse_cluster3d_clean_full',
'cluster3d_pcluster',
'particle_pcluster',
'particle_mpv',
'sparse3d_pcluster_semantics'],
'input_data': [ 'parse_sparse3d_scn',
'sparse3d_reco',
'sparse3d_reco_chi2'],
'kinematics_label': [ 'parse_cluster3d_kinematics_clean',
'cluster3d_pcluster',
'particle_corrected',
'particle_mpv',
'sparse3d_pcluster_semantics'],
'particle_graph': [ 'parse_particle_graph_corrected',
'particle_corrected',
'cluster3d_pcluster'],
'particles_asis': [ 'parse_particle_asis',
'particle_pcluster',
'cluster3d_pcluster'],
'particles_label': [ 'parse_particle_points_with_tagging',
'sparse3d_pcluster',
'particle_corrected'],
'segment_label': [ 'parse_sparse3d_scn',
'sparse3d_pcluster_semantics_ghost']}},
'minibatch_size': 4,
'num_workers': 4,
'shuffle': False},
'model': { 'loss_input': [ 'segment_label',
'particles_label',
'cluster_label',
'kinematics_label',
'particle_graph'],
'modules': { 'chain': { 'enable_cnn_clust': True,
'enable_cosmic': True,
'enable_ghost': True,
'enable_gnn_inter': True,
'enable_gnn_kinematics': True,
'enable_gnn_particle': False,
'enable_gnn_shower': True,
'enable_gnn_track': True,
'enable_ppn': True,
'enable_uresnet': True,
'use_ppn_in_gnn': True,
'verbose': True},
'cosmic_discriminator': { 'network_base': { 'data_dim': 3,
'features': 4,
'leakiness': 0.33,
'spatial_size': 768},
'res_encoder': { 'coordConv': True,
'latent_size': 2,
'pool_mode': 'avg'},
'uresnet_encoder': { 'features': 16,
'filters': 16,
'num_strides': 9},
'use_input_data': False,
'use_true_interactions': False},
'cosmic_loss': { 'node_loss': { 'balance_classes': True,
'name': 'type',
'target_col': 8}},
'dbscan_frag': { 'cluster_classes': [0, 2, 3],
'delta_label': 3,
'dim': 3,
'eps': [ 1.999,
3.999,
1.999,
4.999],
'michel_label': 2,
'min_samples': 1,
'min_size': [3, 3, 3, 3],
'num_classes': 4,
'ppn_distance_threshold': 1.999,
'ppn_mask_radius': 5,
'ppn_score_threshold': 0.9,
'ppn_type_threshold': 0.3,
'track_clustering_method': 'closest_path',
'track_label': 1},
'full_chain_loss': { 'clustering_weight': 1.0,
'cosmic_weight': 1.0,
'flow_weight': 1.0,
'inter_gnn_weight': 1.0,
'kinematics_p_weight': 1.0,
'kinematics_type_weight': 1.0,
'kinematics_weight': 1.0,
'particle_gnn_weight': 1.0,
'ppn_weight': 1.0,
'segmentation_weight': 1.0,
'shower_gnn_weight': 1.0,
'track_gnn_weight': 1.0},
'grappa_inter': { 'base': { 'add_start_dir': True,
'add_start_point': True,
'group_pred': 'score',
'kinematics_mlp': True,
'kinematics_momentum': False,
'kinematics_type': True,
'node_min_size': 3,
'node_type': [ 0,
1,
2,
3],
'start_dir_max_dist': 5,
'vertex_mlp': True},
'edge_encoder': { 'name': 'geo',
'use_numpy': False},
'gnn_model': { 'aggr': 'add',
'edge_classes': 2,
'edge_feats': 19,
'edge_output_feats': 64,
'leakiness': 0.1,
'name': 'meta',
'node_classes': 2,
'node_feats': 28,
'node_output_feats': 64,
'num_mp': 3},
'node_encoder': { 'name': 'geo',
'use_numpy': False},
'type_net': { 'num_hidden': 32},
'vertex_net': { 'num_hidden': 32}},
'grappa_inter_loss': { 'edge_loss': { 'balance_classes': False,
'high_purity': False,
'loss': 'CE',
'name': 'channel',
'reduction': 'sum',
'source_col': 6,
'target': 'group',
'target_col': 7},
'node_loss': { 'balance_classes': True,
'name': 'kinematics',
'spatial_size': 768,
'type_loss': 'CE'}},
'grappa_kinematics': { 'base': { 'edge_dist_metric': 'set',
'edge_dist_numpy': True,
'edge_max_dist': -1,
'kinematics_mlp': True,
'kinematics_momentum': True,
'kinematics_type': False,
'network': 'complete',
'node_min_size': -1,
'node_type': -1},
'edge_encoder': { 'cnn_encoder': { 'name': 'cnn2',
'network_base': { 'data_dim': 3,
'features': 4,
'leakiness': 0.33,
'spatial_size': 768},
'res_encoder': { 'coordConv': True,
'latent_size': 32,
'pool_mode': 'avg'},
'uresnet_encoder': { 'filters': 32,
'input_kernel': 3,
'num_classes': 5,
'num_filters': 32,
'num_strides': 9,
'reps': 2}},
'geo_encoder': { 'more_feats': True},
'name': 'mix_debug',
'normalize': True},
'gnn_model': { 'aggr': 'add',
'edge_classes': 2,
'edge_feats': 51,
'edge_output_feats': 64,
'leak': 0.33,
'name': 'nnconv_old',
'node_classes': 5,
'node_feats': 83,
'node_output_feats': 128,
'num_mp': 3},
'momentum_net': { 'num_hidden': 32},
'node_encoder': { 'cnn_encoder': { 'name': 'cnn2',
'network_base': { 'data_dim': 3,
'features': 4,
'leakiness': 0.33,
'spatial_size': 768},
'res_encoder': { 'coordConv': True,
'latent_size': 64,
'pool_mode': 'avg'},
'uresnet_encoder': { 'filters': 32,
'input_kernel': 3,
'num_classes': 5,
'num_filters': 16,
'num_strides': 9,
'reps': 2}},
'geo_encoder': { 'more_feats': True},
'name': 'mix_debug',
'normalize': True},
'use_true_particles': False},
'grappa_kinematics_loss': { 'edge_loss': { 'balance_classes': False,
'high_purity': False,
'name': 'channel',
'reduction': 'sum',
'target': 'particle_forest'},
'node_loss': { 'name': 'kinematics',
'reg_loss': 'l2'}},
'grappa_particle': { 'base': { 'node_min_size': 10,
'node_type': -1},
'edge_encoder': { 'name': 'geo',
'use_numpy': True},
'gnn_model': { 'aggr': 'add',
'edge_classes': 2,
'edge_feats': 19,
'edge_output_feats': 64,
'leakiness': 0.1,
'name': 'meta',
'node_classes': 2,
'node_feats': 24,
'node_output_feats': 64,
'num_mp': 3},
'node_encoder': { 'name': 'geo',
'use_numpy': True}},
'grappa_particle_loss': { 'edge_loss': { 'balance_classes': False,
'high_purity': True,
'loss': 'CE',
'name': 'channel',
'reduction': 'sum',
'source_col': 5,
'target': 'group',
'target_col': 6},
'node_loss': { 'balance_classes': False,
'group_pred_alg': 'score',
'high_purity': True,
'loss': 'CE',
'name': 'primary',
'reduction': 'sum',
'use_group_pred': True}},
'grappa_shower': { 'base': { 'add_start_dir': True,
'add_start_point': True,
'node_min_size': 3,
'node_type': 0,
'start_dir_max_dist': 5},
'edge_encoder': { 'name': 'geo',
'use_numpy': False},
'gnn_model': { 'aggr': 'add',
'edge_classes': 2,
'edge_feats': 19,
'edge_output_feats': 64,
'leakiness': 0.1,
'name': 'meta',
'node_classes': 2,
'node_feats': 28,
'node_output_feats': 64,
'num_mp': 3},
'node_encoder': { 'name': 'geo',
'use_numpy': False}},
'grappa_shower_loss': { 'edge_loss': { 'high_purity': True,
'name': 'channel',
'source_col': 5,
'target_col': 6},
'node_loss': { 'group_pred_alg': 'score',
'high_purity': True,
'name': 'primary',
'use_group_pred': True}},
'grappa_track': { 'base': { 'add_start_dir': True,
'add_start_point': True,
'node_min_size': 3,
'node_type': 1,
'start_dir_max_dist': 5},
'edge_encoder': { 'name': 'geo',
'use_numpy': False},
'gnn_model': { 'aggr': 'add',
'edge_classes': 2,
'edge_feats': 19,
'edge_output_feats': 64,
'leakiness': 0.1,
'name': 'meta',
'node_classes': 2,
'node_feats': 28,
'node_output_feats': 64,
'num_mp': 3},
'node_encoder': { 'name': 'geo',
'use_numpy': False}},
'grappa_track_loss': { 'edge_loss': { 'high_purity': False,
'name': 'channel',
'source_col': 5,
'target_col': 6}},
'spice': { 'fragment_clustering': { 'cluster_all': False,
'cluster_classes': [ 1],
'min_frag_size': 10,
'min_voxels': 2,
'p_thresholds': [ 0.95,
0.95,
0.95,
0.95],
's_thresholds': [ 0.0,
0.0,
0.0,
0.35]},
'network_base': { 'data_dim': 3,
'features': 4,
'leakiness': 0.33,
'spatial_size': 768},
'spatial_embeddings': { 'coordConv': True,
'embedding_dim': 3,
'seediness_dim': 1,
'sigma_dim': 1},
'uresnet': { 'filters': 64,
'input_kernel_size': 7,
'num_strides': 7,
'reps': 2}},
'spice_loss': { 'embedding_weight': 1.0,
'mask_loss_fn': 'lovasz_hinge',
'min_voxels': 2,
'name': 'se_vectorized_inter',
'seediness_weight': 1.0,
'smoothing_weight': 1.0},
'uresnet_ppn': { 'ppn': { 'classify_endpoints': True,
'data_dim': 3,
'downsample_ghost': True,
'filters': 16,
'model_name': 'ppn',
'model_path': '/sdf/home/l/ldomine/lartpc_mlreco3d_tutorials/book/data/weights_ppn3_snapshot-1999.ckpt',
'num_classes': 5,
'num_strides': 6,
'ppn1_size': 24,
'ppn2_size': 96,
'ppn_num_conv': 1,
'score_threshold': 0.5,
'spatial_size': 768,
'use_encoding': False,
'weight_ppn': 0.9},
'uresnet_lonely': { 'data_dim': 3,
'features': 2,
'filters': 16,
'freeze': False,
'ghost': True,
'leakiness': 0.0,
'num_classes': 5,
'num_strides': 6,
'spatial_size': 768,
'weight_loss': True}}},
'name': 'full_chain',
'network_input': ['input_data']},
'trainval': { 'checkpoint_step': 100,
'concat_result': [ 'seediness',
'margins',
'embeddings',
'fragments',
'fragments_seg',
'shower_fragments',
'shower_edge_index',
'shower_edge_pred',
'shower_node_pred',
'shower_group_pred',
'track_fragments',
'track_edge_index',
'track_node_pred',
'track_edge_pred',
'track_group_pred',
'particle_fragments',
'particle_edge_index',
'particle_node_pred',
'particle_edge_pred',
'particle_group_pred',
'particles',
'inter_edge_index',
'inter_node_pred',
'inter_edge_pred',
'node_pred_p',
'node_pred_type',
'flow_edge_pred',
'kinematics_particles',
'kinematics_edge_index',
'clust_fragments',
'clust_frag_seg',
'interactions',
'inter_cosmic_pred',
'node_pred_vtx',
'total_num_points',
'total_nonghost_points'],
'debug': False,
'gpus': [0],
'iterations': 652,
'log_dir': './log_trash',
'minibatch_size': -1,
'model_path': '/sdf/home/l/ldomine/lartpc_mlreco3d_tutorials/book/data/weights_full5_snapshot-999.cpkt',
'optimizer': {'args': {'lr': 0.001}, 'name': 'Adam'},
'report_step': 1,
'seed': 123,
'train': False,
'unwrapper': 'unwrap_3d_scn',
'weight_prefix': './weights_trash/snapshot'}}
Loading file: /sdf/home/l/ldomine/lartpc_mlreco3d_tutorials/book/data/wire_mpvmpr_2020_04_test_small.root
Loading tree sparse3d_reco
Loading tree sparse3d_reco_chi2
Loading tree sparse3d_pcluster_semantics_ghost
Loading tree cluster3d_pcluster
Loading tree particle_pcluster
Loading tree particle_mpv
Loading tree sparse3d_pcluster_semantics
Loading tree sparse3d_pcluster
Loading tree particle_corrected
Sequential(
(0): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): NetworkInNetwork64->4
)
(1): OutputLayer()
)
ClusterCNN(
(input): Sequential(
(0): InputLayer()
(1): SubmanifoldConvolution 4->64 C7
)
(concat): JoinTable()
(add): AddTable()
(encoding_block): Sequential(
(0): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 64->64 C3
(2): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 64->64 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 64->64 C3
(2): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 64->64 C3
)
)
(3): AddTable()
)
(1): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 128->128 C3
(2): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 128->128 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 128->128 C3
(2): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 128->128 C3
)
)
(3): AddTable()
)
(2): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 192->192 C3
(2): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 192->192 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 192->192 C3
(2): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 192->192 C3
)
)
(3): AddTable()
)
(3): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 256->256 C3
(2): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 256->256 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 256->256 C3
(2): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 256->256 C3
)
)
(3): AddTable()
)
(4): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 320->320 C3
(2): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 320->320 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 320->320 C3
(2): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 320->320 C3
)
)
(3): AddTable()
)
(5): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 384->384 C3
(2): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 384->384 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 384->384 C3
(2): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 384->384 C3
)
)
(3): AddTable()
)
(6): Sequential(
(0): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(448,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 448->448 C3
(2): BatchNormLeakyReLU(448,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 448->448 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(448,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 448->448 C3
(2): BatchNormLeakyReLU(448,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 448->448 C3
)
)
(3): AddTable()
)
)
(encoding_conv): Sequential(
(0): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Convolution 64->128 C2/2
)
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Convolution 128->192 C2/2
)
(2): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Convolution 192->256 C2/2
)
(3): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Convolution 256->320 C2/2
)
(4): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Convolution 320->384 C2/2
)
(5): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Convolution 384->448 C2/2
)
(6): Sequential()
)
(decoding_block): Sequential(
(0): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork768->384
(1): Sequential(
(0): BatchNormLeakyReLU(768,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 768->384 C3
(2): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 384->384 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 384->384 C3
(2): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 384->384 C3
)
)
(3): AddTable()
)
(1): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork640->320
(1): Sequential(
(0): BatchNormLeakyReLU(640,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 640->320 C3
(2): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 320->320 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 320->320 C3
(2): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 320->320 C3
)
)
(3): AddTable()
)
(2): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork512->256
(1): Sequential(
(0): BatchNormLeakyReLU(512,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 512->256 C3
(2): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 256->256 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 256->256 C3
(2): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 256->256 C3
)
)
(3): AddTable()
)
(3): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork384->192
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 384->192 C3
(2): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 192->192 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 192->192 C3
(2): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 192->192 C3
)
)
(3): AddTable()
)
(4): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork256->128
(1): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 256->128 C3
(2): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 128->128 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 128->128 C3
(2): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 128->128 C3
)
)
(3): AddTable()
)
(5): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork128->64
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 128->64 C3
(2): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 64->64 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 64->64 C3
(2): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 64->64 C3
)
)
(3): AddTable()
)
)
(decoding_conv): Sequential(
(0): Sequential(
(0): BatchNormLeakyReLU(448,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 448->384 C2/2
)
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 384->320 C2/2
)
(2): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 320->256 C2/2
)
(3): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 256->192 C2/2
)
(4): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 192->128 C2/2
)
(5): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 128->64 C2/2
)
)
(decoding_block2): Sequential(
(0): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork768->384
(1): Sequential(
(0): BatchNormLeakyReLU(768,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 768->384 C3
(2): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 384->384 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 384->384 C3
(2): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 384->384 C3
)
)
(3): AddTable()
)
(1): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork640->320
(1): Sequential(
(0): BatchNormLeakyReLU(640,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 640->320 C3
(2): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 320->320 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 320->320 C3
(2): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 320->320 C3
)
)
(3): AddTable()
)
(2): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork512->256
(1): Sequential(
(0): BatchNormLeakyReLU(512,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 512->256 C3
(2): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 256->256 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 256->256 C3
(2): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 256->256 C3
)
)
(3): AddTable()
)
(3): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork384->192
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 384->192 C3
(2): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 192->192 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 192->192 C3
(2): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 192->192 C3
)
)
(3): AddTable()
)
(4): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork256->128
(1): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 256->128 C3
(2): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 128->128 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 128->128 C3
(2): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 128->128 C3
)
)
(3): AddTable()
)
(5): Sequential(
(0): ConcatTable(
(0): NetworkInNetwork128->64
(1): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 128->64 C3
(2): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 64->64 C3
)
)
(1): AddTable()
(2): ConcatTable(
(0): Identity()
(1): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): SubmanifoldConvolution 64->64 C3
(2): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(3): SubmanifoldConvolution 64->64 C3
)
)
(3): AddTable()
)
)
(decoding_conv2): Sequential(
(0): Sequential(
(0): BatchNormLeakyReLU(448,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 448->384 C2/2
)
(1): Sequential(
(0): BatchNormLeakyReLU(384,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 384->320 C2/2
)
(2): Sequential(
(0): BatchNormLeakyReLU(320,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 320->256 C2/2
)
(3): Sequential(
(0): BatchNormLeakyReLU(256,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 256->192 C2/2
)
(4): Sequential(
(0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 192->128 C2/2
)
(5): Sequential(
(0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): Deconvolution 128->64 C2/2
)
)
(outputEmbeddings): Sequential(
(0): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): NetworkInNetwork64->4
)
(1): OutputLayer()
)
(outputSeediness): Sequential(
(0): Sequential(
(0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0.33)
(1): NetworkInNetwork64->1
)
(1): OutputLayer()
)
(tanh): Tanh()
(sigmoid): Sigmoid()
)
Total Number of Trainable Parameters = 175210432
Restoring weights for from /sdf/home/l/ldomine/lartpc_mlreco3d_tutorials/book/data/weights_full5_snapshot-999.cpkt...
INCOMPATIBLE KEYS!
['module.uresnet_lonely.bn.weight', 'module.uresnet_lonely.bn.bias', 'module.uresnet_lonely.bn.running_mean', 'module.uresnet_lonely.bn.running_var']
make sure your module is named
Done.
Restoring weights for ppn from /sdf/home/l/ldomine/lartpc_mlreco3d_tutorials/book/data/weights_ppn3_snapshot-1999.ckpt...
Done.
The output is hidden because it reprints the entire (lengthy) configuration. Feel free to take a look if you are curious!
Finally we run the chain for 1 iteration:
# Call forward to run the net, store the output in "res"
data, output = hs.trainer.forward(hs.data_io_iter)
Segmentation Accuracy: 0.9882
PPN Accuracy: 0.8556
Clustering Accuracy: 0.9465
Shower fragment clustering accuracy: 1.0000
Shower primary prediction accuracy: 0.0000
Track fragment clustering accuracy: 0.9907
Particle ID accuracy: -1.2915
Interaction grouping accuracy: 0.9609
Flow accuracy: 0.9841
Type accuracy: 0.7500
Momentum accuracy: -3.7679
Vertex position accuracy: 0.6862
Vertex score accuracy: 0.9111
Cosmic discrimination accuracy: 0.8696
Now we can play with data and output to visualize what we are interested in. Feel free to change the
entry index if you want to look at a different entry! We picked this one because it has a few Michel electrons in it.
entry = 2
Let us grab the interesting quantities:
clust_label = data['cluster_label'][entry]
input_data = data['input_data'][entry]
segment_label = data['segment_label'][entry][:, -1]
Step 1: extract the semantic segmentation predictions.¶
Our game plan is simple: first, get the UResNet semantic segmentation predictions from the chain output,
which are stored in output['segmentation']. These are actually softmax scores, so don’t forget to take
the argmax. Also remember: if our dataset has ghost points, we need to retrieve the ghost/non-ghost predictions
from UResNet, which are stored in output['ghost'].
ghost_mask = output['ghost'][entry].argmax(axis=1) == 0
segment_pred = output['segmentation'][entry].argmax(axis=1)
We can even take a look at a visualization to prepare for the next steps:
trace = []
edep = input_data[segment_label < 5]
trace += scatter_points(input_data[segment_label < 5], markersize=1,
                        color=segment_label[segment_label < 5], cmin=0, cmax=10,
                        colorscale=plotly.colors.qualitative.D3)
trace[-1].name = 'True semantic labels (true no-ghost mask)'
trace += scatter_points(input_data[ghost_mask], markersize=1,
                        color=segment_pred[ghost_mask], cmin=0, cmax=10,
                        colorscale=plotly.colors.qualitative.D3)
trace[-1].name = 'Predicted semantic labels (predicted no-ghost mask)'
fig = go.Figure(data=trace, layout=plotly_layout3d())
fig.update_layout(legend=dict(x=1.0, y=0.8))
iplot(fig)
Step 2: identify muon and electron particles.¶
track_label = 1
michel_label = 2
Use the predictions that you retrieved and a simple clustering algorithm such as DBSCAN to isolate Michel electron particle instances (groups of voxels).
import numpy as np
from sklearn.cluster import DBSCAN
track_mask = (segment_pred == track_label) & ghost_mask
michel_mask = (segment_pred == michel_label) & ghost_mask
michel_clusters = DBSCAN(eps=3, min_samples=1).fit_predict(data['input_data'][entry][michel_mask][:, :3])
print('Detected %d Michel candidates' % len(np.unique(michel_clusters[michel_clusters>-1])))
Detected 2 Michel candidates
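To build intuition for what DBSCAN is doing here, this small self-contained sketch (with made-up voxel positions, not chain output) clusters two well-separated blobs of 3D points. With eps=3 and min_samples=1, each blob becomes one instance and no point is labeled as noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN

np.random.seed(0)

# Two fake "Michel" blobs, far apart compared to eps
blob_a = np.random.normal(loc=(0., 0., 0.), scale=0.5, size=(20, 3))
blob_b = np.random.normal(loc=(20., 20., 20.), scale=0.5, size=(15, 3))
voxels = np.concatenate([blob_a, blob_b])

# Same settings as above: points within eps of each other merge into one cluster
labels = DBSCAN(eps=3, min_samples=1).fit_predict(voxels)
n_clusters = len(np.unique(labels[labels > -1]))
print('Found %d clusters' % n_clusters)  # Found 2 clusters
```

Increasing eps would eventually merge the blobs into one cluster; increasing min_samples would start flagging isolated points as noise (label -1).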
Step 3: make sure the electron is touching the end of the muon track.¶
There are many ways to do this check.
Is it touching a track-like particle?¶
First let's make sure the Michel electron cluster candidates
are touching a track, i.e. are next to track-like predicted voxels within distance_threshold.
While we are at it, we'll also make sure the Michel candidates have a minimum voxel count.
distance_threshold = 3 # in voxels, to account for potential gaps
min_voxel_count = 10
from scipy.spatial.distance import cdist
track_voxels = data['input_data'][entry][track_mask][:, :3]
candidates = []
for i in np.unique(michel_clusters[michel_clusters>-1]):
    candidate_idx = michel_clusters == i
    candidate_voxels = data['input_data'][entry][michel_mask][candidate_idx][:, :3]
    d = cdist(candidate_voxels, track_voxels)
    if d.min() < distance_threshold and np.count_nonzero(candidate_idx) > min_voxel_count:
        candidates.append(candidate_idx)
candidates = np.array(candidates)
print("Kept %d / %d Michel candidates" % (len(candidates), len(np.unique(michel_clusters[michel_clusters>-1]))))
Kept 2 / 2 Michel candidates
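The touching test boils down to a single cdist call: it returns the full pairwise distance matrix between the two point sets, so its minimum is the closest approach between the candidate cluster and any track voxel. A minimal sketch with hypothetical coordinates:

```python
import numpy as np
from scipy.spatial.distance import cdist

# A short fake track and two fake Michel candidates (made-up coordinates)
track = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
touching = np.array([[3., 1., 0.]])      # ~1.4 voxels from the track end
detached = np.array([[10., 10., 10.]])   # far from every track voxel

# d.min() is the closest candidate-to-track distance
print(cdist(touching, track).min() < 3)   # True  -> kept
print(cdist(detached, track).min() < 3)   # False -> rejected
```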
Wait, what if it was a misclassified Delta ray electron?¶
Delta ray electrons are knock-off electrons that can be produced along the trajectory of a muon, so if UResNet mispredicted the delta ray voxels as Michel voxels, we would be wrong! To avoid that, let's also make sure that the point of contact is at the end of the track. Again, there are many ways to do this check; this is just one possible heuristic.
radius = 10 # in voxels
track_clusters1 = DBSCAN(eps=1.9, min_samples=1).fit_predict(track_voxels)
candidates2 = []
for candidate in candidates:
    candidate_voxels = data['input_data'][entry][michel_mask][candidate][:, :3]
    d = cdist(candidate_voxels, track_voxels)
    ci, ti = d.argmin()//d.shape[1], d.argmin()%d.shape[1]
    d2 = cdist(track_voxels, [track_voxels[ti]])
    far_voxels = d2.reshape((-1,)) > radius
    # Is the track still in one piece?
    track_clusters2 = DBSCAN(eps=1.9, min_samples=1).fit_predict(track_voxels[far_voxels])
    # i.e., is the cluster count before and after the ablation the same
    if len(np.unique(track_clusters1[track_clusters1>-1])) == len(np.unique(track_clusters2[track_clusters2>-1])):
        candidates2.append(candidate)
candidates2 = np.array(candidates2)
print("Kept %d / %d Michel candidates" % (len(candidates2), len(np.unique(michel_clusters[michel_clusters>-1]))))
Kept 2 / 2 Michel candidates
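To build intuition for this ablation heuristic, here is a self-contained toy using a synthetic straight track (not chain output): removing a ball of voxels around the end of the track leaves it in one piece, while removing a ball around its middle splits it in two.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def n_pieces(voxels):
    """Count connected pieces using the same DBSCAN settings as above."""
    labels = DBSCAN(eps=1.9, min_samples=1).fit_predict(voxels)
    return len(np.unique(labels[labels > -1]))

# Synthetic straight track: 50 voxels spaced 1 apart along x
track = np.stack([np.arange(50.), np.zeros(50), np.zeros(50)], axis=1)

# Ablate a ball of radius 10 around the end point vs. the midpoint
end_ablated = track[np.linalg.norm(track - track[-1], axis=1) > 10]
mid_ablated = track[np.linalg.norm(track - track[25], axis=1) > 10]

print(n_pieces(end_ablated))  # 1 -> contact point was at the track end
print(n_pieces(mid_ablated))  # 2 -> contact point was mid-track (delta ray-like)
```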
Step 4: make a plot!¶
Let's use the voxel count as a substitute for the reconstructed energy. This approximation is fairly accurate for shower-like particles. Make a histogram of the Michel electron candidates' voxel counts.
Since there are only so many Michel electrons
per entry, you will need to loop over several entries, possibly more than a batch size
worth of entries. Using the previous steps, we wrote a function find_michel(data, output, entry)
which returns a list of candidates.
def find_michel(data, output, entry):
    clust_label = data['cluster_label'][entry]
    input_data = data['input_data'][entry]
    segment_label = data['segment_label'][entry][:, -1]
    ghost_mask = output['ghost'][entry].argmax(axis=1) == 0
    segment_pred = output['segmentation'][entry].argmax(axis=1)
    track_mask = (segment_pred == track_label) & ghost_mask
    michel_mask = (segment_pred == michel_label) & ghost_mask
    if np.count_nonzero(michel_mask) == 0:
        return []
    michel_clusters = DBSCAN(eps=3, min_samples=1).fit_predict(data['input_data'][entry][michel_mask][:, :3])
    # Is it touching?
    track_voxels = data['input_data'][entry][track_mask][:, :3]
    candidates = []
    for i in np.unique(michel_clusters[michel_clusters>-1]):
        candidate_idx = michel_clusters == i
        candidate_voxels = data['input_data'][entry][michel_mask][candidate_idx][:, :3]
        d = cdist(candidate_voxels, track_voxels)
        if d.min() < distance_threshold and np.count_nonzero(candidate_idx) > min_voxel_count:
            candidates.append(candidate_idx)
    candidates = np.array(candidates)
    # Is it the end of the track?
    track_clusters1 = DBSCAN(eps=1.9, min_samples=1).fit_predict(track_voxels)
    candidates2 = []
    for candidate in candidates:
        candidate_voxels = data['input_data'][entry][michel_mask][candidate][:, :3]
        d = cdist(candidate_voxels, track_voxels)
        ci, ti = d.argmin()//d.shape[1], d.argmin()%d.shape[1]
        d2 = cdist(track_voxels, [track_voxels[ti]])
        far_voxels = d2.reshape((-1,)) > radius
        # Is the track still in one piece?
        track_clusters2 = DBSCAN(eps=1.9, min_samples=1).fit_predict(track_voxels[far_voxels])
        # i.e., is the cluster count before and after the ablation the same
        if len(np.unique(track_clusters1[track_clusters1>-1])) == len(np.unique(track_clusters2[track_clusters2>-1])):
            candidates2.append(candidate)
    candidates2 = np.array(candidates2)
    return candidates2
And I can run it on each entry across several iterations.
iterations = 10
all_candidates = []
for iteration in range(iterations):
    data, output = hs.trainer.forward(hs.data_io_iter)
    for entry in range(len(data['input_data'])):
        print("Iteration %d / Entry %d " % (iteration, entry))
        entry_candidates = find_michel(data, output, entry)
        all_candidates.extend(entry_candidates)

michel_voxel_count = [np.count_nonzero(candidate) for candidate in all_candidates]
Segmentation Accuracy: 0.9886
PPN Accuracy: 0.8113
Clustering Accuracy: 0.9274
Shower fragment clustering accuracy: 0.8704
Shower primary prediction accuracy: 0.8667
Track fragment clustering accuracy: 0.9484
Particle ID accuracy: 0.3011
Interaction grouping accuracy: 0.9531
Flow accuracy: 0.9883
Type accuracy: 1.0000
Momentum accuracy: -0.6601
Vertex position accuracy: 0.7584
Vertex score accuracy: 0.9487
Cosmic discrimination accuracy: 1.0000
Iteration 0 / Entry 0
Iteration 0 / Entry 1
Iteration 0 / Entry 2
Iteration 0 / Entry 3
The rest of the output is hidden because it repeats the same (lengthy) accuracy printout and Iteration/Entry lines for iterations 1 through 9.
import matplotlib.pyplot as plt
import seaborn
seaborn.set(rc={
    'figure.figsize': (15, 10),
})
seaborn.set_context('talk')
plt.hist(michel_voxel_count)
plt.xlabel("Voxel count")
plt.ylabel("Michel electron candidates")
Text(0, 0.5, 'Michel electron candidates')
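If you wanted to turn voxel counts into an approximate energy axis, a linear scale is the simplest option. The constant below is a made-up placeholder, not a real calibration; in practice it would be measured from the detector itself (which is exactly what Michel electrons are used for).

```python
# Hypothetical voxel-count -> energy conversion. The MeV-per-voxel
# constant is a made-up placeholder, NOT a real calibration value.
MEV_PER_VOXEL = 0.1  # hypothetical calibration constant

def voxel_count_to_energy(n_voxels, mev_per_voxel=MEV_PER_VOXEL):
    """Crude linear energy estimate from the number of shower voxels."""
    return n_voxels * mev_per_voxel

print(voxel_count_to_energy(300))  # ~30 MeV with this placeholder scale
```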
Optional: how well did we do?¶
You can keep the exercise going by looking at the true Michel candidates (same heuristic, using the true semantic labels), matching them to the predicted Michel candidates, and computing metrics such as purity (the fraction of predicted candidates that are matched) or efficiency (the fraction of true Michel electrons that find a match).
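One possible sketch of those metrics (an assumption, not the chain's built-in evaluation): match each predicted candidate to a true one by voxel overlap, where pred_candidates and true_candidates are assumed to be lists of boolean masks over the same voxel set.

```python
import numpy as np

def match_candidates(pred_candidates, true_candidates, min_overlap=0.5):
    """Match predicted and true candidates by fractional voxel overlap,
    then report purity and efficiency. Hypothetical helper, for illustration."""
    matched_pred, matched_true = set(), set()
    for i, pred in enumerate(pred_candidates):
        for j, truth in enumerate(true_candidates):
            overlap = np.count_nonzero(pred & truth)
            # A match requires most of the predicted voxels to be true voxels
            if overlap / max(np.count_nonzero(pred), 1) > min_overlap:
                matched_pred.add(i)
                matched_true.add(j)
    purity = len(matched_pred) / max(len(pred_candidates), 1)
    efficiency = len(matched_true) / max(len(true_candidates), 1)
    return purity, efficiency

# Toy usage with one predicted and one true candidate mask
pred = [np.array([True, True, False, False])]
true = [np.array([True, True, True, False])]
print(match_candidates(pred, true))  # (1.0, 1.0)
```

The overlap criterion (and its threshold) is a design choice; other common options match on the closest candidate centroid or require mutual-majority overlap.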
Other readings¶
Michel Electron Reconstruction Using Cosmic-Ray Data from the MicroBooNE LArTPC https://lss.fnal.gov/archive/2017/pub/fermilab-pub-17-090-nd.pdf
- AAA+17
R. Acciarri, C. Adams, R. An, J. Anthony, J. Asaadi, M. Auger, L. Bagby, S. Balasubramanian, B. Baller, C. Barnes, and others. Michel Electron Reconstruction Using Cosmic-Ray Data from the MicroBooNE LArTPC. Journal of Instrumentation, 12(09):P09014, 2017. https://lss.fnal.gov/archive/2017/pub/fermilab-pub-17-090-nd.pdf.